126 research outputs found

    Transient error mitigation by means of approximate logic circuits

    International Mention in the doctoral degree. The technological advances in the manufacturing of electronic circuits have greatly improved their performance, but they have also increased the sensitivity of electronic devices to radiation-induced errors. Among these, the most common effects are Single-Event Effects (SEEs): electrical perturbations caused by the strike of high-energy particles, which may modify the internal state of a memory element (Single-Event Upset, SEU) or generate erroneous transient pulses (Single-Event Transient, SET), among other effects. These events threaten the reliability of electronic circuits, so fault-tolerance techniques must be applied to deal with them. The most common fault-tolerance techniques are based on full replication (Duplication With Comparison, DWC, or Triple Modular Redundancy, TMR). These techniques cover a wide range of the failure mechanisms present in electronic circuits, but they incur high overheads in area and power consumption. For this reason, lighter alternatives are often sought, at the expense of slightly reduced reliability in the least critical circuit sections. In this context a new design paradigm is emerging, known as approximate computing, which improves circuit performance in exchange for slight modifications of the intended functionality. This is an interesting, and so far little studied, approach for the design of lightweight fault-tolerant solutions. The main goal of this thesis is to develop new lightweight fault-tolerant techniques with partial replication by means of approximate logic circuits. These circuits can be designed with great flexibility, so both the level of protection and the overheads can be adjusted at will depending on the needs of each application. However, finding optimal approximate circuits for a given application is still a challenge.
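The full-replication schemes mentioned above (DWC, TMR) can be pictured with a minimal sketch. The bitwise majority voter below is a generic illustration of how TMR masks a single upset; it is not code from the thesis.

```python
def tmr_vote(a: int, b: int, c: int) -> int:
    """Bitwise 2-of-3 majority vote over three redundant copies of a word."""
    return (a & b) | (a & c) | (b & c)

# A single-event upset (SEU) flips a bit in one replica;
# the other two copies outvote it and the output stays correct.
golden = 0b1011
upset = golden ^ 0b0100            # one replica corrupted by a bit flip
assert tmr_vote(golden, golden, upset) == golden
```

The cost is the obvious one the abstract describes: three full copies of the circuit plus a voter, which is why partial replication with approximate circuits is attractive.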
In this thesis a method for approximate circuit generation is proposed, denoted fault approximation, which consists in assigning constant logic values to specific circuit lines. In addition, several criteria are developed to generate the most suitable approximate circuits for each application using this fault-approximation mechanism. These criteria are based on the idea of approximating the least testable sections of a circuit, which reduces overheads while minimising the loss of reliability; the selection of approximations is therefore linked to testability measures. The first fault-selection criterion developed in this thesis uses static testability measures. The approximations are generated from the results of a fault simulation of the target circuit and from a user-specified testability threshold. The number of approximated faults depends on the chosen threshold, which makes it possible to generate approximate circuits with different performances. Although this approach was initially intended for combinational circuits, it has also been extended to sequential circuits by considering the flip-flops as both inputs and outputs of the combinational part of the circuit. The experimental results show that this technique achieves wide scalability and an acceptable trade-off between reliability and overheads, with very low computational complexity. However, the selection criterion based on static testability measures has some drawbacks: adjusting the performance of the generated approximate circuits through the approximation threshold is not intuitive, and static testability measures do not account for the changes that occur as faults are approximated. Therefore, an alternative criterion is proposed, based on dynamic testability measures. With this criterion, the testability of each fault is computed by means of an implication-based probability analysis.
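A rough sketch of the fault-approximation mechanism (tying a circuit line to a logic constant and measuring the functional error this introduces) might look as follows. The three-gate netlist, line names, and gate set are invented for illustration; the thesis works on real gate-level circuits and testability data.

```python
from itertools import product

# Toy combinational netlist: each internal line maps to (gate, input lines).
NETLIST = {
    "n1": ("AND", ("a", "b")),
    "n2": ("OR",  ("b", "c")),
    "y":  ("AND", ("n1", "n2")),   # circuit output
}
GATES = {"AND": lambda x, y: x & y, "OR": lambda x, y: x | y}

def evaluate(netlist, inputs, forced=None):
    """Evaluate the netlist; `forced` maps approximated lines to constants."""
    values = dict(inputs)
    if forced:
        values.update(forced)
    for line, (gate, ins) in netlist.items():
        if line not in values:                  # forced lines stay constant
            values[line] = GATES[gate](*(values[i] for i in ins))
    return values["y"]

def error_rate(netlist, forced):
    """Fraction of input vectors on which the approximation disagrees
    with the original circuit (exhaustive simulation)."""
    vectors = list(product([0, 1], repeat=3))
    mismatches = 0
    for a, b, c in vectors:
        inp = {"a": a, "b": b, "c": c}
        mismatches += evaluate(netlist, inp) != evaluate(netlist, inp, forced)
    return mismatches / len(vectors)

# Tying n2 to 0 only matters on the inputs where y would otherwise be 1.
print(error_rate(NETLIST, {"n2": 0}))   # → 0.25
```

Note that some approximations are free: forcing n2 to 1 never changes y here, because whenever n1 = a·b is 1, b is 1 and so n2 = b+c is already 1 — exactly the kind of poorly testable line the selection criteria target.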
The probabilities are updated with each newly approximated fault, so that on each iteration the most beneficial approximation is chosen, that is, the fault with the lowest probability. In addition, the computed probabilities make it possible to estimate the level of protection against faults that the generated approximate circuits provide, so circuits can be generated that adhere to a target error rate; by modifying this target, circuits with different performances are obtained. The experimental results show that this new approach meets the target error rate with reasonably good precision, and that the approximate circuits generated with this technique perform better than those obtained with the static-measure approach. The fault implications have also been reused to implement a new type of logic transformation, which consists in substituting functionally similar nodes. Once the fault-selection criteria have been developed, they are applied to different scenarios. First, the proposed techniques are extended to FPGAs, taking into account the particularities of this kind of circuit. This approach has been validated by means of radiation experiments, which show that partial replication with approximate circuits can be even more robust than full replication, because a smaller area reduces the probability of SEE occurrence. The proposed techniques have also been applied to a real application circuit, the ARM Cortex-M0 microprocessor, using a set of software benchmarks to generate the required testability measures. Finally, a comparative study of the proposed approaches against approximate circuit generation by means of evolutionary techniques has been performed. Evolutionary approaches use large amounts of computation to generate multiple circuits by trial and error, thus reducing the possibility of falling into local minima.
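The iterative selection described at the start of this abstract (after each approximation, re-evaluate the cost of every remaining candidate and stop once the target error rate would be exceeded) can be sketched as below. For simplicity this toy computes exact error rates by exhaustive simulation in place of the thesis's implication-based probability estimates, and the two-gate circuit and line names are invented.

```python
from itertools import product

# Toy circuit y = (a AND b) OR (c AND d); n1 and n2 are internal lines
# that a fault approximation may tie to a constant (names invented).
def evaluate(inp, forced):
    n1 = forced.get("n1", inp["a"] & inp["b"])
    n2 = forced.get("n2", inp["c"] & inp["d"])
    return n1 | n2

def error_of(forced):
    """Exact error rate of an approximation, by exhaustive simulation
    (a stand-in for the implication-based probability analysis)."""
    vecs = [dict(zip("abcd", v)) for v in product([0, 1], repeat=4)]
    return sum(evaluate(v, forced) != evaluate(v, {}) for v in vecs) / len(vecs)

def greedy_approximate(lines, target):
    """Iteratively tie the cheapest line to a constant; candidate costs
    are re-evaluated after every choice, as in the dynamic criterion."""
    forced = {}
    candidates = {(ln, v) for ln in lines for v in (0, 1)}
    while candidates:
        line, value = min(candidates,
                          key=lambda c: error_of({**forced, c[0]: c[1]}))
        if error_of({**forced, line: value}) > target:
            break                       # next step would exceed the target
        forced[line] = value
        candidates = {c for c in candidates if c[0] != line}
    return forced

print(greedy_approximate(["n1", "n2"], target=0.25))
```

With a 0.25 error budget the loop approximates exactly one AND branch (error 3/16) and then stops, because silencing the second branch as well would push the error rate to 7/16.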
The experimental results show that the circuits generated with evolutionary approaches perform slightly better than those generated with the techniques proposed here, although at a much higher computational cost. In summary, several original fault-mitigation techniques based on approximate logic circuits are proposed. These approaches are demonstrated in various scenarios, showing that scalability and adaptability to the requirements of each application are their main virtues. Official Doctoral Programme in Electrical, Electronic and Control Engineering. Committee: President: Raoul Velazco; Secretary: Almudena Lindoso Muñoz; Member: Jaume Segura Fuste

    Design of an I-IP module for the detection of transient errors in embedded systems

    The goal of this project is to design an IP module that connects to the communication bus of an embedded system in a non-intrusive way and that, with suitable modifications of the running software, is able to detect transient errors in the system. The proposed solution must be efficient, that is, able to detect an acceptable proportion of errors. In addition, it must affect the processor's performance as little as possible and be economical, both in cost and in area. The project also includes the validation of the module and of its detection capabilities. Specifically, the purpose of the IP module is to supervise communications with memory, or with other peripherals that behave as memories; complementary modules for other components have been developed within the OPTIMISE project. To achieve these goals, the LEON3, a microprocessor developed by Gaisler Research, is used as the test vehicle. This choice is motivated by the fact that the LEON3 is a synthesisable microprocessor, that is, its functionality is described in a set of designs written in a hardware description language, namely VHDL, and these designs are available as source code under a free licence. Finally, it is a 32-bit microprocessor with real applications in today's aerospace industry. Ingeniería Industrial
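One classic way such a non-intrusive bus watcher can detect transient errors is signature monitoring: the monitor snoops memory traffic, accumulates a running signature, and compares it with a reference that the instrumented software emits at checkpoints. The sketch below illustrates that general idea only; the class name, the CRC choice, and the checkpoint protocol are invented and do not describe the actual OPTIMISE module.

```python
import zlib

class BusMonitor:
    """Hypothetical sketch of a non-intrusive bus watcher: it snoops memory
    writes, accumulates a running CRC signature, and compares it with a
    reference signature that the (modified) software provides at checkpoints.
    A mismatch signals a transient error in the monitored traffic."""

    def __init__(self):
        self.signature = 0

    def snoop_write(self, address: int, data: int) -> None:
        # Fold each observed (address, data) pair into the running CRC.
        word = address.to_bytes(4, "little") + data.to_bytes(4, "little")
        self.signature = zlib.crc32(word, self.signature)

    def checkpoint(self, expected: int) -> bool:
        ok = self.signature == expected
        self.signature = 0             # restart for the next program block
        return ok

# A golden run computes the reference signature ...
golden = BusMonitor()
for addr, data in [(0x1000, 42), (0x1004, 7)]:
    golden.snoop_write(addr, data)
reference = golden.signature

# ... and a run where a transient pulse (SET) corrupts one data bit
# is then flagged at the next checkpoint.
faulty = BusMonitor()
faulty.snoop_write(0x1000, 42)
faulty.snoop_write(0x1004, 7 ^ 0x80)
assert not faulty.checkpoint(reference)
```

The appeal of the scheme matches the project's goals: the monitor sits beside the bus, so it adds no processor overhead beyond the checkpoint instructions inserted in software.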

    Isópodos, tanaidáceos y cumáceos (Crustacea, Peracarida) asociados al alga Stypocaulon scoparium en la Península Ibérica

    The distribution and abundance patterns of isopods, tanaids and cumaceans (Crustacea: Peracarida) associated with the alga Stypocaulon scoparium (L.) Kützing were studied. Fourteen stations were selected along the Iberian Peninsula and five environmental factors were measured (seawater temperature, conductivity, dissolved oxygen, turbidity and pH). The Atlantic coast was characterised by lower temperature and conductivity and higher values of oxygen and turbidity than the Mediterranean coast. Cover of S. scoparium was higher in the Strait of Gibraltar than in the remaining stations, coinciding with the maximum number of peracaridean species. Twenty-three species were collected (15 isopods, 4 tanaids and 4 cumaceans). Isopods were more abundant at the Atlantic stations of the Iberian Peninsula, while tanaids and cumaceans were dominant on the Mediterranean coast. The classification of species into geographical distribution groups showed that most species had an Atlantic-Mediterranean distribution (76%) and only 9% were endemic Mediterranean species. Multivariate analysis showed that the distribution of species was mainly correlated with temperature, conductivity and oxygen, although the cover of S. scoparium also influenced the abundances of some taxa.

    Partial TMR in FPGAs Using Approximate Logic Circuits

    TMR is a very effective technique to mitigate SEU effects in FPGAs, but it is often expensive in terms of FPGA resource utilization and power consumption. For certain applications, partial TMR can be used to trade off reliability against the cost of mitigation. In this work we propose a new approach to build partial TMR circuits for FPGAs using approximate logic circuits. This approach is scalable, has a fine granularity, and can provide a flexible balance between reliability and overheads. The proposed approach has been validated by the results of fault injection experiments and proton irradiation campaigns. This work was supported in part by the Spanish Ministry of Economy and Competitiveness under contract ESP2015-68245-C4-1-P.
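One way approximate replicas can be combined in a TMR-style scheme is to pair an under-approximation (which never outputs a spurious 1) with an over-approximation (which never outputs a spurious 0), so the majority voter masks an upset on the exact copy wherever the two approximations agree. The functions below are invented toy examples of this arrangement, not circuits from the paper.

```python
def majority(a, b, c):
    """Bitwise 2-of-3 voter."""
    return (a & b) | (a & c) | (b & c)

# Exact function and two invented approximations bracketing it:
# under(x) <= exact(x) <= over(x) for every input x.
exact = lambda a, b, c: (a & b) | (b & c)
under = lambda a, b, c: a & b     # may miss 1s, never adds spurious 1s
over  = lambda a, b, c: b         # may add 1s, never drops real 1s

def protected(a, b, c, flip_exact=False):
    """Vote the exact copy against the two approximate replicas;
    flip_exact models an SEU hitting the exact copy."""
    e = exact(a, b, c) ^ (1 if flip_exact else 0)
    return majority(e, under(a, b, c), over(a, b, c))

# Wherever the two approximations agree, an upset on the exact copy is
# masked; elsewhere the exact copy decides and the upset propagates.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            assert protected(a, b, c) == exact(a, b, c)
            if under(a, b, c) == over(a, b, c):
                assert protected(a, b, c, flip_exact=True) == exact(a, b, c)
```

Because the two approximate replicas are smaller than full copies, the scheme trades complete masking coverage for the area and cross-section savings the abstract describes.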

    Environmental change rate and dispersion pattern modulate the dynamics of evolutionary rescue of the cyanobacterium Microcystis aeruginosa

    The rate of biodiversity loss is so high that some scientists state that we are witnessing the sixth mass extinction. In this situation, it is necessary to ask: can organisms resist the environmental changes that are taking place? Recent studies have shown that a population can recover from a stress situation through evolutionary rescue (ER) events. These events depend on the size of the population, its previous history and the rate of environmental change. The aim of this work is to add knowledge about ER dynamics by creating stress situations with selective agents (sulphur and salinity), using the toxic cyanobacterium Microcystis aeruginosa as a model organism. The experiments are based on exposing populations to severe stress and analysing the effect of previous dispersal events and deterioration rates on the occurrence of ER events among populations. The design comprises three rates of environmental change (constant, slow and fast; under salinity stress only the first two treatments were used) and three dispersal modes (isolated, local or global). In total, 324 and 720 populations were exposed to stressful conditions caused by sulphur and salinity, respectively. The results showed that dispersal modes and environmental deterioration rates modulated the occurrence of ER events. Dispersal was observed to favour ER events for both selective agents. Regarding the rate of environmental change, an increase of ER events under constant change was observed in the populations exposed to sulphur stress, whereas under saline stress ER events were more frequent when there was previous deterioration (i.e., a slow rate of environmental change). In conclusion, ER events in M. aeruginosa depend on the selective agent, the probability being higher for salinity than for sulphur. 
Thus, it could be hypothesized that general conclusions in ER studies must take the selective agent into account. This work has been financially supported by the projects CGL2014-53682-P (Ministerio de Economía y Competitividad) and CGL2017-87314-P (Ministerio de Economía, Industria y Competitividad), and the Universidad de Málaga, Campus de Excelencia Internacional Andalucía Tech.

    Error Mitigation Using Approximate Logic Circuits: A Comparison of Probabilistic and Evolutionary Approaches

    Technology scaling poses an increasing challenge to the reliability of digital circuits. Hardware redundancy solutions, such as triple modular redundancy (TMR), produce very high area overhead, so partial redundancy is often used to reduce the overheads. Approximate logic circuits provide a general framework for optimized mitigation of errors arising from a broad class of failure mechanisms, including transient, intermittent, and permanent failures. However, generating an optimal redundant logic circuit that masks faults with the highest probability while minimizing the area overheads is a challenging problem. In this study, we propose and compare two new approaches to generate approximate logic circuits to be used in a TMR scheme. The probabilistic approach approximates a circuit in a greedy manner based on a probabilistic estimation of the error. The evolutionary approach can provide radically different solutions that are hard to reach by other methods. By combining these two approaches, the solution space can be explored in depth. Experimental results demonstrate that the evolutionary approach can produce better solutions, but the probabilistic approach comes close; moreover, both approaches scale much better than other existing partial redundancy techniques. This work was supported by the Ministry of Economy and Competitiveness of Spain under project ESP2015-68245-C4-1-P, and by the Czech Science Foundation project GA16-17538S and the Ministry of Education, Youth and Sports of the Czech Republic under the National Programme of Sustainability (NPU II), project IT4Innovations excellence in science - LQ1602.
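As a generic illustration of the evolutionary side of the comparison, the (1+1)-style loop below mutates a set of constant assignments by trial and error, keeping a child when it approximates at least as many lines without exceeding an error cap. The toy circuit, fitness function, and parameters are all assumptions for illustration, not the paper's actual evolutionary design (which explores circuit structure far more broadly).

```python
import random

# Toy netlist: y = (a AND b) XOR (b OR c); candidate approximations tie
# the internal lines n1, n2, n3 to constants (all names invented).
def evaluate(inp, forced):
    n1 = forced.get("n1", inp[0] & inp[1])
    n2 = forced.get("n2", inp[1] | inp[2])
    return forced.get("n3", n1 ^ n2)

def error_rate(forced):
    vecs = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
    return sum(evaluate(v, forced) != evaluate(v, {}) for v in vecs) / 8

def fitness(forced, cap=0.25):
    # Reward approximating more lines (area saved) under an error cap.
    return len(forced) if error_rate(forced) <= cap else -1

def one_plus_one_ea(steps=300, seed=1):
    """(1+1) evolutionary loop: mutate, keep the child if not worse."""
    rng = random.Random(seed)
    parent = {}
    for _ in range(steps):
        child = dict(parent)
        line = rng.choice(["n1", "n2", "n3"])
        if line in child and rng.random() < 0.5:
            del child[line]                  # undo an approximation
        else:
            child[line] = rng.randint(0, 1)  # try a constant on this line
        if fitness(child) >= fitness(parent):
            parent = child
    return parent
```

On this toy instance the loop reliably discovers one of the two single-line approximations that exactly meet the 0.25 error cap; the greedy probabilistic method would find the same solutions directly, which mirrors the paper's observation that the two approaches often land close to each other.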

    Analytical Solutions to Minimum-Norm Problems

    For G ∈ ℝ^(m×n) and g ∈ ℝ^m, the minimization min ‖Gψ − g‖₂, with ψ ∈ ℝ^n, is known as the Tykhonov regularization. We transport the Tykhonov regularization to an infinite-dimensional setting, that is, min ‖T(h) − k‖, where T: H → K is a continuous linear operator between Hilbert spaces H, K, and h ∈ H, k ∈ K. In order to avoid an unbounded set of solutions for the Tykhonov regularization, we transform the infinite-dimensional Tykhonov regularization into a multiobjective optimization problem: min ‖T(h) − k‖ and min ‖h‖. We call it the bounded Tykhonov regularization. A Pareto-optimal solution of the bounded Tykhonov regularization is found. Finally, the bounded Tykhonov regularization is modified to introduce the precise Tykhonov regularization: min ‖T(h) − k‖ with ‖h‖ = α. The precise Tykhonov regularization is also optimally solved. All of these mathematical solutions are optimal for the design of Magnetic Resonance Imaging (MRI) coils.
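The finite-dimensional statement can be checked numerically. The NumPy sketch below (with an invented matrix) illustrates the Pareto flavour of minimizing ‖Gψ − g‖₂ together with ‖ψ‖: among all exact solutions of an underdetermined system, the pseudoinverse solution has minimum norm, because it is orthogonal to the null space of G.

```python
import numpy as np

# Underdetermined system: many ψ satisfy Gψ = g exactly (matrix invented).
G = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])
g = np.array([3.0, 2.0])

# Minimum-norm least-squares solution (a Pareto point: it minimizes
# ‖Gψ − g‖₂ first and, among those minimizers, ‖ψ‖₂).
psi = np.linalg.pinv(G) @ g
assert np.allclose(G @ psi, g)

# Any other exact solution differs by a null-space vector and is longer,
# since psi lies in the row space of G, orthogonal to the null space.
null = np.array([2.0, -1.0, 1.0])       # G @ null == 0
assert np.allclose(G @ null, 0)
other = psi + 0.5 * null
assert np.linalg.norm(other) > np.linalg.norm(psi)
```

The norm-constrained ("precise") variant ‖h‖ = α adds a Lagrange-type condition on top of this picture; the sketch only demonstrates the unconstrained minimum-norm case.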

    Pareto Optimality for Multioptimization of Continuous Linear Operators

    This manuscript determines the set of Pareto optimal solutions of certain multiobjective-optimization problems involving continuous linear operators defined on Banach spaces and Hilbert spaces. These multioptimization problems typically arise in engineering. In order to accomplish our goals, we first characterize, in an abstract setting, the set of Pareto optimal solutions of any multiobjective optimization problem. We then provide sufficient topological conditions to ensure the existence of Pareto optimal solutions. Next, we determine the Pareto optimal solutions of convex max-min problems involving continuous linear operators defined on Banach spaces. We prove that the set of Pareto optimal solutions of a convex max-min of the form max ‖T(x)‖, min ‖x‖ coincides with the set of multiples of supporting vectors of T. Lastly, we apply this result to convex max-min problems in the Hilbert space setting, which also covers the convex max-min problems that arise in the design of truly optimal coils in engineering.
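In the matrix case the supporting vectors can be computed with an SVD: max ‖T(x)‖ over ‖x‖ = 1 is attained at the top right singular vector, and its scalar multiples form the Pareto set described above. A small numerical check with an invented matrix:

```python
import numpy as np

# max ‖T x‖ with ‖x‖ = 1 is attained at the top right singular vector;
# its scalar multiples are the supporting vectors of T.
T = np.array([[3.0, 0.0],
              [4.0, 0.0],
              [0.0, 1.0]])
_, s, Vt = np.linalg.svd(T)
v = Vt[0]                                   # top right singular vector

assert np.isclose(np.linalg.norm(T @ v), s[0])   # gain equals ‖T‖ = 5
rng = np.random.default_rng(0)
for x in rng.normal(size=(100, 2)):
    x = x / np.linalg.norm(x)
    assert np.linalg.norm(T @ x) <= s[0] + 1e-9  # no unit vector does better
```

For this T the operator norm is 5 (from the first column of length √(3² + 4²)), so the supporting vectors are the multiples of (±1, 0).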

    Cysteine String Protein-α Prevents Activity-Dependent Degeneration in GABAergic Synapses

    The continuous release of neurotransmitter could be seen to place a persistent burden on presynaptic proteins, one that could compromise nerve terminal function. This supposition, and the molecular mechanisms that might protect highly active synapses, merit investigation. In hippocampal cultures from knock-out mice lacking the presynaptic cochaperone cysteine string protein-α (CSP-α), we observe progressive degeneration of highly active synaptotagmin 2 (Syt2)-expressing GABAergic synapses, but surprisingly not of glutamatergic terminals. In CSP-α knock-out mice, synaptic degeneration of basket cell terminals occurs in vivo in the presence of normal glutamatergic synapses onto dentate gyrus granule cells. Consistent with this, in hippocampal cultures from these mice, the frequency of miniature IPSCs, caused by spontaneous GABA release, progressively declines, whereas the frequency of miniature excitatory AMPA receptor-mediated currents (mEPSCs), caused by spontaneous release of glutamate, is normal. However, the mEPSC amplitude progressively decreases. Remarkably, long-term block of glutamatergic transmission in cultures lacking CSP-α substantially rescues Syt2-expressing GABAergic synapses from neurodegeneration. These findings demonstrate that elevated neural activity increases synapse vulnerability and that CSP-α is essential to maintain presynaptic function under a physiologically high-activity regimen.

    Characterization of the cyanobacterium Oscillatoria sp. isolated from extreme sulphureous water from Los Baños de la Hedionda (S Spain)

    Los Baños de la Hedionda (Málaga, S Spain) is a natural sulphureous spa (150-200 µM sulphide). Although such high sulphide levels can affect the photosynthetic process, numerous photosynthetic microorganisms inhabit the spa. Among them, we isolated a strain of the cyanobacterium Oscillatoria sp., a genus well known for its tolerance to sulphide. Objectives: First, to analyse the photosynthetic characteristics and growth rate of the isolated strain, as well as the effect of the presence of sulphide on both processes; second, to determine the limit of genetic adaptation of this strain to sulphide. Methods: The resistance of the isolated strain to sulphide was studied by analysing the effect of increasing sulphide levels (up to 1600 µM) on photosynthetic performance and growth. The limit of genetic adaptation was explored using an experimental evolution design known as the ratchet protocol, which makes it possible to discern the maximum capacity of genetic adaptation of Oscillatoria sp. under exposure to increasing doses of sulphide. Conclusions: The strain showed maximum growth rates at 200 µM sulphide, although reduced growth was still found at up to 800 µM sulphide. A significant increase in resistance was achieved in all derived populations during the ratchet experiment (surviving at sulphide concentrations above 2 mM). Moreover, the populations showed different evolutionary potential to adapt to sulphide, depending on historical contingency. Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech. Spanish Ministry of Science and Innovation through CGL2014-53682-P project. Predoctoral State Grant from Scientific and Technical Research and Innovation Plan, Spanish Ministry of Economy, Industry and Competitiveness I+D+i ECC/1402/2013, 201